common: Generalized XML-style tool-call parsing with streaming support (GLM 4.5/4.6 + MiniMax M2 + SeedOSS + Kimi-K2 + Qwen3-Coder + Apriel-1.5 + Xiaomi-MiMo) #16932
Conversation
I'm looking forward to getting this PR merged! @hksdpc255 Does it require a custom Jinja template from the previous PR, or does it work well as-is?
For now, I'd recommend using a custom template if you're running more complex workloads. Edit: The official template is now working properly; there's no longer any need for a custom template. Edit 2: Official template support for MiniMax-M2 has been removed. See the comment and ochafik/minja#7 (comment) for details.
FYI, I've updated (my fork of) Minja with support for GLM 4.6's template.
@ochafik Excellent work! Once llama.cpp syncs your changes, some parts of this PR can be safely removed. However, a few small patches are still needed, for example replacing
Currently, the official MiniMax-M2 chat template fails to run tool calls because

@hksdpc255 Both should be supported. The confusing error you probably got was because of how minja implements it. Please feel free to file bugs on https://github.com/ochafik/minja; it should be cleaner to add syntax support there than to patch things up in llama.cpp.
@ochafik Thank you for pointing that out. I'm currently applying your suggested fix in llama.cpp and will test whether it works as expected. Thanks again for the help!
Good news! The Minimax M2 tool call is now working. I'll push the fix later.
Model: unsloth's UD-Q3_K_XL
Hi @hksdpc255, Model: unsloth--MiniMax-M2-GGUF Q8_0

```sh
./llama-cli \
  -m /models/hub/models--unsloth--MiniMax-M2-GGUF/snapshots/*/Q8_0/MiniMax-M2-Q8_0-00001-of-00005.gguf \
  -ngl 99 \
  -sm layer \
  -ts 1,1,1,1,1,1,1,1 \
  -c 78000 \
  -t 16 \
  --jinja \
  -i
```

Output:

> what is the capital of france?
Okay, the user asked a straightforward question: "What is the capital of France?" This is basic geography knowledge, so the answer should be simple. I don't need to overcomplicate things.
Hmm, maybe the user is just testing if I know basic facts, or perhaps they're new to this kind of question. Either way, the response should be clear and concise. No need for extra details unless they ask follow-ups.
I recall that Paris is the capital of France. It's one of the most well-known capitals globally, so this should be an easy one. The user might be a student working on homework, or someone prepping for trivia. Or maybe they're just curious—either way, I should confirm it confidently.
No signs of confusion or deeper needs here. The question is very direct. I'll just state the answer plainly. If they want more info later, like landmarks or history, they'll ask. For now, keep it simple: Paris is the capital.
Wait, should I add that it's also a major cultural hub? Nah, overcomplicating it. Just the fact. Done.
</think>
The capital of France is **Paris**.
Paris is not only the political center but also a major cultural, economic, and gastronomic hub, famous for landmarks like the Eiffel Tower, the Louvre Museum, Notre-Dame Cathedral, and the Champs-Élysées.
@emuchogu Sorry, I haven't tested it with llama-cli, so I'm not sure whether it's affected.
I've reverted my previous PR (reasoning-format-minimax-m2) and merged PR #16932 into my testing-branch16 for isolated testing.

Without this PR:
- Streaming: no initial `<think>` tag in the output.
- Curl without streaming: no initial `<think>` tag in the output.

With this PR:
- Streaming: works correctly.
- Curl without streaming: no initial `<think>` tag in the output.
Oh! It seems you're using non-streaming mode. I can now reproduce your issue. Let me dig into what's happening…
Yes, exactly: it works correctly in streaming mode (tested through the SvelteUI, which is specifically designed to be debug-friendly without needing `curl -N`), but not in non-streaming mode.
Toolcall debug on SvelteUI with your #16932 + #16618 :) Custom JSON:
@ServeurpersoCom The problem is that I added some code that makes it fall back to llama.cpp's original parser when there are no tools, so the new parser is never called (see lines 2748 to 2753 in af5216e). Simply deleting that code should fix the issue. I'll run more tests before pushing a new commit.
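For readers following along, here is a minimal sketch of the kind of early-out being described, with hypothetical names (not the actual llama.cpp code at the cited lines):

```cpp
// Hypothetical sketch of the fallback branch described above: when the
// request carries no tools, dispatch falls back to the legacy parser and
// the new XML-style parser is never reached.
#include <string>
#include <vector>

struct chat_inputs {
    std::vector<std::string> tools; // tool definitions from the request
};

static std::string parse_legacy(const chat_inputs &)    { return "legacy"; }
static std::string parse_xml_style(const chat_inputs &) { return "xml"; }

static std::string dispatch_parser(const chat_inputs & inputs) {
    if (inputs.tools.empty()) {
        // Deleting this early-out lets the new parser also handle
        // requests without tools (the bug discussed above).
        return parse_legacy(inputs);
    }
    return parse_xml_style(inputs);
}
```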
I've successfully tested it without those lines and confirmed it works as expected for streaming, non-streaming, reasoning_content, and tool calls.
I just realized this, and it seems strange: shouldn't `--reasoning-format none` completely bypass any parsing logic instead of still going through it? It's meant to be the raw passthrough mode for observing the model's native output.

The .cpp files are already becoming huge and monolithic, making them harder to touch or refactor safely. The `--reasoning-format` options are also poorly named and not very explicit. In the long run, a modular templating system would help avoid piling up even more C++ parsing code.

If this work is meant to unify several next-generation parsers, maybe we could add a new keyword to `--reasoning-format` instead? It's important to keep `none` as a truly no-parsing mode, since it's essential for debugging new models. Also, the current `auto` mode is actually just `deepseek` in practice, so it might be clearer to rename or document it that way to avoid confusion; your unified detection logic could then be implemented directly under `auto` (or `deepseek`, since they're basically aliases)?
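To make the requested behavior concrete, a minimal sketch of the passthrough semantics, with hypothetical names (not the actual llama.cpp enum or function):

```cpp
// Sketch of the requested --reasoning-format none behavior: return the raw
// model output untouched instead of routing it through any parser.
#include <string>

enum class reasoning_format { none, auto_detect, deepseek };

std::string postprocess_output(const std::string & raw, reasoning_format fmt) {
    if (fmt == reasoning_format::none) {
        return raw; // truly no parsing: raw passthrough for debugging new models
    }
    // ... otherwise extract reasoning content / tool calls from `raw` ...
    return raw;
}
```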
I built and ran your fork/branch successfully on my Strix Halo AMD APU.

The reasoning now correctly starts with the `<think>` tag. I tried the web search tool in OpenWebUI, and it works. I will have a deeper look into this later. I skimmed the comments in this PR and saw something about customizing the chat template. If anyone has a more specific hint, it would of course be welcome.
I doubt that will work; the problem isn't
Don't livepatch, but it's OK to provide a patched template in
@semidark As you can see from the maintainers' comments (cc @CISC), I cannot fix the official chat template by live-patching. Please make sure you're using the chat template provided in this PR. You may need to add a parameter such as
Thanks for the hint. I just built a container with the patch and did not even have a look at the files provided in this PR 😬. Sorry for that. I was so hyped to get this up and running that I fiddled it all together on my Android smartphone with Termux as SSH client. 📳 I will try MiniMax-M2.jinja then. UPDATE: The WebSearch tool call in OpenWebUI now works with the MiniMax-M2.jinja template. 🤩
@lainwir3d Could you share a bit more information about how you're running llama-server? The chat context would also be helpful.
@lainwir3d Try the new template; I'm not sure if it will fix your problem.
Hey @hksdpc255, here is how I start it. Will try with the template fix now.
A simpler fix is just fixing the `set`, or wrapping it in `{% set _args = tc.arguments or {} %}`
Thank you for your help. I will apply your fix instead. |
No better here: is that embedded / inline Python? I can maybe try to fix it myself.
Strange, that would indicate that
@lainwir3d If you can manage to output the whole … Edit: It could of course also be that the model decides to output an unparsable tool call, which would end up as a string.
CISC left a comment:
Very cool, and probably much needed. I'd imagine it takes a little effort to add the necessary logic for all the supported models though, so nice to have this added to the baseline first?
Agreed, there is clear desire from the community for these models.
Looks like a tool call is failing. Could this be the issue? Sorry, trying to get my head around this; I'm new to all this LLM / tool-call stuff. The arguments for `read_file` seem broken...
Yep, that would do it.
Yes, I imagine the comprehensive refactor @aldehir is working on will take some time, and in the meanwhile people really want a working version of tool calling for those models, so I'll say let's go for it, especially since @hksdpc255 put in a lot of work and accommodated all the suggestions.
Co-authored-by: Sigbjørn Skjæret <[email protected]>
Could you try this? Or, even better, use the fixed template here: #15904






Generalized and streaming-capable XML-style tool-call parsing with grammar enforcement and automatic template fixing.
Based on PR #15904, this patch introduces a generalized implementation for almost all XML-style tool-call formats.
Supported models
- GLM 4.5 / 4.6
- MiniMax M2
- SeedOSS
- Kimi-K2
- Qwen3-Coder
- Apriel-1.5
- Xiaomi MiMo
Grammar-constrained tool-call outputs
Tool-call messages generated by the model are now strictly validated against a defined grammar.
A new automatic grammar generator simplifies the process of creating grammars for new models.
This ensures that all tool-call outputs are well-formed, structurally consistent, and reliably parsed.
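As an illustration only (hand-written, not the output of this PR's grammar generator), a GBNF-style grammar constraining one XML-style tool-call shape might look roughly like this, embedded as a string the way llama.cpp passes grammars around:

```cpp
// Hand-written GBNF-style illustration of one XML-style tool-call shape.
// This is NOT the grammar produced by the PR's generator; real grammars
// also constrain the argument payload to valid JSON per tool schema.
const char * XML_TOOL_CALL_GRAMMAR = R"GBNF(
root      ::= tool-call
tool-call ::= "<tool_call>" ws name ws args ws "</tool_call>"
name      ::= "<name>" [a-zA-Z0-9_-]+ "</name>"
args      ::= "<arguments>" [^<]* "</arguments>"
ws        ::= [ \t\n]*
)GBNF";
```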
Streaming support for tool-call parsing
The parser now supports streaming parsing, enabling incremental processing of tool-call messages as they are generated.
This enhancement improves responsiveness and allows real-time interaction during model inference.
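A minimal sketch of the incremental idea, with hypothetical names (not the PR's actual parser API): chunks are appended to a buffer, and the parser can report completion as soon as the closing tag arrives, instead of waiting for generation to finish.

```cpp
// Toy incremental parser for illustration: feed() accepts each newly
// generated chunk and reports when a full tool call has been buffered.
#include <string>

struct xml_stream_parser {
    std::string buf;

    // Returns true once a complete <tool_call>...</tool_call> is present.
    bool feed(const std::string & chunk) {
        buf += chunk;
        return buf.find("</tool_call>") != std::string::npos;
    }
};
```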
Automatic chat-template fixing
A lightweight Jinja2-based patcher has been added to automatically fix official chat templates before use.
With this change, official templates now work out of the box, eliminating the need for custom modifications.
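A rough sketch of the idea, using the `or {}` fix discussed earlier in this thread as the rewrite rule; the concrete before/after strings are assumptions for illustration, not the PR's actual patch set:

```cpp
// Sketch of a lightweight template patcher: rewrite a known-problematic
// construct in the official Jinja template before it is rendered.
#include <string>

std::string patch_chat_template(std::string tmpl) {
    const std::string bad  = "{% set _args = tc.arguments %}";          // assumed broken form
    const std::string good = "{% set _args = tc.arguments or {} %}";   // fix from this thread
    for (size_t pos = 0; (pos = tmpl.find(bad, pos)) != std::string::npos; pos += good.size()) {
        tmpl.replace(pos, bad.size(), good);
    }
    return tmpl;
}
```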
In-context reasoning
The parser now supports multiple reasoning blocks within a single generation, even when interleaved with tool calls.
All reasoning content is preserved. No information is lost during parsing or streaming.
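Concretely, a single generation may now look something like this (tag names vary per model; this shape is illustrative only):

```
<think>Decide which tool to call.</think>
<tool_call><name>get_weather</name><arguments>{"city": "Paris"}</arguments></tool_call>
<think>Interpret the tool result and answer.</think>
The weather in Paris is ...
```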
Enhanced unit tests
Added a unit test for the streaming-mode parser. It simulates the generation phase by feeding content character by character, comparing the parsed results and verifying that streaming and non-streaming modes reach the same final state.
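A toy version of that equivalence check, with hypothetical names (the real test exercises the actual chat parser):

```cpp
// Feeding the generation one character at a time must reach the same
// final state as feeding it all at once.
#include <cassert>
#include <string>

struct toy_parser {
    std::string seen;
    void feed(const std::string & chunk) { seen += chunk; }
    bool complete() const { return seen.find("</tool_call>") != std::string::npos; }
};

int main() {
    const std::string gen =
        "<tool_call><name>ls</name><arguments>{}</arguments></tool_call>";

    toy_parser streamed, oneshot;
    for (char c : gen) streamed.feed(std::string(1, c)); // character by character
    oneshot.feed(gen);                                   // whole output at once

    assert(streamed.seen == oneshot.seen);
    assert(streamed.complete() && oneshot.complete());
    return 0;
}
```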
Additional Notes
- Use `--reasoning-format none` and add `-lv 1` in the command line to enable more detailed logging.
- Please use the chat template included in this PR, or any other chat template that you are certain will work correctly.